
    Spin geometry and conservation laws in the Kerr spacetime

    In this paper we review some facts, both classical and recent, concerning the geometry and analysis of the Kerr and related black hole spacetimes. This includes the analysis of test fields on these spacetimes. Central to our analysis is the existence of a valence (2,0) Killing spinor, which we use to construct symmetry operators and conserved currents, as well as a new energy momentum tensor for Maxwell test fields on a class of spacetimes containing the Kerr spacetime. We then outline how this new energy momentum tensor can be used to obtain decay estimates for Maxwell test fields. An important motivation for this work is the black hole stability problem, where fields with non-zero spin present interesting new challenges. The main tool in the analysis is the 2-spinor calculus, and for completeness we introduce its main features. Comment: 30 pages. To appear in the volume "The Centenary of General Relativity" in "Surveys in Differential Geometry", edited by Lydia Bieri and Shing-Tung Yau, in the series "Surveys in Differential Geometry".
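In standard 2-spinor notation (a sketch using conventional conventions, not necessarily those of the paper), a valence (2,0) Killing spinor is a symmetric spinor field κ_{AB} satisfying the Killing spinor equation, and it generates a Killing tensor and hence a conserved quantity along geodesics:

```latex
% Killing spinor equation for a symmetric valence (2,0) spinor \kappa_{AB}
\nabla_{A'(A}\,\kappa_{BC)} = 0 .
% It generates a (Hermitian) Killing tensor
K_{ab} = \kappa_{AB}\,\bar{\kappa}_{A'B'} ,
% so that along any geodesic with tangent \dot{x}^a the quantity
k = K_{ab}\,\dot{x}^a \dot{x}^b
% is conserved; in the Kerr spacetime this yields the Carter constant.
```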

    The robustness of proofreading to crowding-induced pseudo-processivity in the MAPK pathway

    Double phosphorylation of protein kinases is a common feature of signalling cascades. This motif may reduce cross-talk between signalling pathways, as the second phosphorylation site allows for proofreading, especially when phosphorylation is distributive rather than processive. Recent studies suggest that phosphorylation can be 'pseudo-processive' in the crowded cellular environment, as rebinding after the first phosphorylation is enhanced by slow diffusion. Here, we use a simple model with unsaturated reactants to show that specificity for one substrate over another drops as rebinding increases and pseudo-processive behavior becomes possible. However, this loss of specificity with increased rebinding is typically also observed if two distinct enzyme species are required for phosphorylation, i.e. when the system is necessarily distributive. Thus the loss of specificity is due to an intrinsic reduction in selectivity with increased rebinding, which benefits inefficient reactions, rather than to pseudo-processivity itself. We also show that proofreading can remain effective when the intended signalling pathway exhibits high levels of rebinding-induced pseudo-processivity, unlike other proposed advantages of the dual phosphorylation motif. Comment: To appear in Biophys. J.
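The claim that rebinding erodes specificity can be illustrated with a toy model (this is a sketch under simplified assumptions, not the paper's exact model; the probabilities `p_good`, `p_bad`, and `q` are illustrative). Each binding phosphorylates with probability p; after unbinding, the pair rebinds with probability q; in the unsaturated regime the two-step flux scales as the square of the per-encounter success probability:

```python
def encounter_success(p, q):
    """Probability of at least one phosphorylation during a single
    diffusive encounter, where each binding succeeds with probability p
    and the pair rebinds with probability q after each unbinding."""
    return p / (1.0 - q * (1.0 - p))

def specificity(p_good, p_bad, q):
    """Ratio of double-phosphorylation fluxes for two substrates,
    assuming the two-step flux scales as the square of the
    per-encounter success probability (unsaturated reactants)."""
    s_good = encounter_success(p_good, q)
    s_bad = encounter_success(p_bad, q)
    return (s_good / s_bad) ** 2

# With no rebinding (q = 0) specificity equals (p_good / p_bad)**2;
# strong rebinding pushes both success probabilities toward 1,
# eroding the advantage of the better substrate.
```

As q grows, both substrates are almost certain to be modified per encounter, so the inefficient substrate benefits most and the specificity ratio collapses toward 1, mirroring the intrinsic selectivity loss described above.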

    Atomic Diffusion and Mixing in Old Stars V: A deeper look into the Globular Cluster NGC 6752

    Abundance trends with evolutionary phase for heavier elements have been shown to exist in the globular cluster NGC 6752 ([Fe/H] = -1.6). These trends are a result of atomic diffusion and additional (non-convective) mixing. Studying such trends can provide us with important constraints on the extent to which diffusion modifies the internal structure and surface abundances of solar-type, metal-poor stars. Taking advantage of a larger data sample, we investigate the reality and the size of these abundance trends and address questions and potential biases associated with the various stellar populations that make up NGC 6752. Based on uvby Strömgren photometry, we are able to separate three stellar populations in NGC 6752 along the evolutionary sequence from the base of the red giant branch down to the turnoff point. We find weak systematic abundance trends with evolutionary phase for Ca, Ti, and Fe, which are best explained by stellar-structure models including atomic diffusion with efficient additional mixing. We derive a new value for the initial lithium abundance of NGC 6752, corrected for the effects of atomic diffusion and additional mixing, which falls slightly below the predicted standard BBN value. We find three stellar populations by combining photometric and spectroscopic data of 194 stars in the globular cluster NGC 6752. Abundance trends for groups of elements differently affected by atomic diffusion and additional mixing are identified. Although the statistical significance of the individual trends is weak, they all support the notion that atomic diffusion is operational along the evolutionary sequence of NGC 6752. Comment: 15 pages, 11 figures, 2 online tables.

    Coping with lists in the ifcOWL ontology

    Over the past few years, several suggestions have been made of how to convert an EXPRESS schema into an OWL ontology. The conversion from EXPRESS to OWL is of particular use to the architectural design and construction industry, because one of its key data models, the Industry Foundation Classes (IFC), is represented using the EXPRESS information modelling language. In each of these conversion options, the way in which lists are converted (e.g. lists of coordinates, lists of spaces in a floor) is key to the structure and eventual strength of the resulting ontology. In this article, we outline and discuss the main decisions that can be made in converting LIST concepts in EXPRESS to equivalent OWL expressions. This allows one to identify which conversion option is appropriate to support proper and efficient information reuse in the domain of architecture and construction.
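One common conversion option encodes an EXPRESS LIST as a chain of nodes, each pointing at its contents and at the next node, analogous to the rdf:first/rdf:rest pattern. A minimal sketch of that pattern (the predicate names `hasContents` and `hasNext` are illustrative placeholders, not the actual ifcOWL vocabulary):

```python
def list_to_linked_triples(items, list_id="list0"):
    """Encode an ordered list as (subject, predicate, object) triples
    using the linked-list pattern (analogous to rdf:first / rdf:rest)
    that several EXPRESS-to-OWL conversion options use for LIST types."""
    triples = []
    nodes = ["_:%s_n%d" % (list_id, i) for i in range(len(items))]
    for i, item in enumerate(items):
        triples.append((nodes[i], "hasContents", item))       # rdf:first analogue
        tail = nodes[i + 1] if i + 1 < len(items) else "nil"  # rdf:rest / rdf:nil analogue
        triples.append((nodes[i], "hasNext", tail))
    return triples
```

The design trade-off discussed in the article shows up directly here: the linked-list form preserves order and supports OWL reasoning over list membership, at the cost of one intermediate node (and two triples) per element.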

    Semitransparency in interaction-free measurements

    We discuss the effect of semitransparency in a quantum-Zeno-like interaction-free measurement setup, a quantum-physics based approach that might significantly reduce sample damage in imaging and microscopy. With an emphasis on applications in electron microscopy, we simulate the behavior of probe particles in an interaction-free measurement setup with semitransparent samples, and we show that the transparency of a sample can be measured in such a setup. However, such a measurement is in general not possible without losing (i.e., absorbing or scattering) probe particles, which causes sample damage. We show how the number of lost particles can be minimized by adjusting the number of round trips through the setup, and we explicitly calculate the number of lost particles in measurements which either aim at distinguishing two transparencies or at measuring an unknown transparency precisely. We also discuss the effect of the sample causing phase shifts in interaction-free measurements. Comparing the resulting loss of probe particles with a classical measurement of transparency, we find that interaction-free measurements only provide a benefit in two cases: first, if two semitransparent samples with a high contrast are to be distinguished, interaction-free measurements lose fewer particles than classical measurements by a factor that increases with the contrast. This implies that interaction-free measurements with zero loss are possible if one of the samples is perfectly transparent. A second case where interaction-free measurements outperform classical measurements is if three conditions are met: the particle source exhibits Poissonian number statistics, the number of lost particles cannot be measured, and the transparency is larger than approximately 1/2. In all other cases, interaction-free measurements lose as many probe particles as classical measurements or more.Comment: 11 pages, 10 figures.
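The role of the number of round trips can be illustrated in the simplest limit, a fully opaque sample in a polarization-rotation quantum Zeno scheme (a sketch of the standard textbook setup, not the paper's semitransparent analysis): each pass rotates the probe state by pi/(2N), and the opaque sample projects the state back with amplitude cos(theta) per pass, so the survival probability approaches 1 as N grows.

```python
import math

def zeno_survival(n_passes):
    """Survival probability of a probe particle after n_passes round trips
    in a quantum-Zeno interaction-free setup with a fully opaque sample.
    Each pass rotates the probe state by pi/(2*n_passes); the sample
    projects it back with amplitude cos(theta), hence cos(theta)**2
    per pass in probability."""
    theta = math.pi / (2 * n_passes)
    return math.cos(theta) ** (2 * n_passes)

def zeno_loss(n_passes):
    """Probability that the probe is absorbed or scattered by the sample."""
    return 1.0 - zeno_survival(n_passes)
```

For large N the loss falls off roughly as pi**2 / (4*N), which is why adjusting the number of round trips controls sample damage; the semitransparent case analysed above replaces the hard projection with an amplitude transmission per pass.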

    Tools for urban sound quality assessment


    The impact of global communication latency at extreme scales on Krylov methods

    Get PDF
    Krylov Subspace Methods (KSMs) are popular numerical tools for solving large linear systems of equations. We consider their role in solving sparse systems on future massively parallel distributed-memory machines by estimating the future performance of their constituent operations. To this end we construct a model that is simple but takes topology and network acceleration into account, as both are important considerations. We show that, as the number of nodes of a parallel machine grows very large, the increasing latency cost of global reductions may well become a problematic bottleneck for traditional formulations of these methods. Finally, we discuss how pipelined KSMs, with appropriately chosen pipeline depths, can be used to tackle this potential problem.
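The latency argument can be sketched with a toy critical-path model (this is an illustrative model, not the authors'; the parameters `t_spmv` and `t_latency` are assumptions): a tree-structured allreduce costs latency times log2 of the node count, so the reduction term grows with machine size while the local work per node does not, and pipelining hides part of it behind local work.

```python
import math

def iteration_time(n_nodes, t_spmv, t_latency, pipelined=False, depth=1):
    """Toy critical-path cost for one CG-type Krylov iteration.
    A global reduction (dot product) costs t_latency * log2(n_nodes)
    on a tree allreduce. Classical CG pays for the sparse matrix-vector
    product (SpMV) plus two reductions sequentially; pipelined variants
    overlap the reductions with up to `depth` iterations of local work,
    hiding that much of the reduction cost."""
    t_reduce = t_latency * math.log2(n_nodes)
    if not pipelined:
        return t_spmv + 2 * t_reduce
    hidden = min(2 * t_reduce, depth * t_spmv)  # latency hidden behind local work
    return t_spmv + 2 * t_reduce - hidden
```

In this model the pipeline depth needed to hide the reductions completely grows with the node count, matching the observation that deeper pipelines become attractive at extreme scales (at the price of extra vectors and potentially weaker numerical stability).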